Learning Linear Classifiers Sensitive to Example Dependent and Noisy Costs

Authors

  • Peter Geibel
  • Ulf Brefeld
  • Fritz Wysotzki
Abstract

Learning algorithms from the fields of artificial neural networks and machine learning typically do not take costs into account, or allow only costs that depend on the classes of the training examples. As an extension of class-dependent costs, we consider costs that are example dependent, i.e. dependent on both features and class. We derive a cost-sensitive perceptron learning rule for non-separable classes that can be extended to multi-modal classes (DIPOL), and present a natural cost-sensitive extension of the support vector machine (SVM).
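A per-example cost can be folded into the classical perceptron update by scaling the step taken on each misclassified example. The sketch below is a minimal illustration of this idea, not the paper's exact derivation; the function name and the fixed learning rate are assumptions made for the example.

```python
import numpy as np

def cost_sensitive_perceptron(X, y, costs, epochs=100, lr=0.1):
    """Perceptron with per-example costs (illustrative sketch only).
    y contains labels in {-1, +1}; costs[i] scales the update for
    example i, so costly mistakes push the hyperplane further."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi, ci in zip(X, y, costs):
            if yi * (xi @ w + b) <= 0:       # misclassified (or on boundary)
                w += lr * ci * yi * xi       # cost-weighted update
                b += lr * ci * yi
    return w, b
```

With all costs equal to 1 this reduces to the standard perceptron rule; raising the cost of one class biases the decision boundary away from errors on that class.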


Related articles

Application of ensemble learning techniques to model the atmospheric concentration of SO2

In view of pollution prediction modeling, the study adopts homogeneous (random forest, bagging, and additive regression) and heterogeneous (voting) ensemble classifiers to predict the atmospheric concentration of sulphur dioxide. For model validation, results were compared against widely known single base classifiers such as support vector machine, multilayer perceptron, linear regression and re...


Perceptron Based Learning with Example Dependent and Noisy Costs

Learning algorithms from the fields of artificial neural networks and machine learning typically do not take costs into account, or allow only costs that depend on the classes of the training examples. As an extension of class-dependent costs, we consider costs that are example dependent, i.e. dependent on both features and class. We derive a cost-sensitive perceptron learning rule for nonsepa...


Obtaining calibrated probability estimates from decision trees and naive Bayesian classifiers

Accurate, well-calibrated estimates of class membership probabilities are needed in many supervised learning applications, in particular when a cost-sensitive decision must be made about examples with example-dependent costs. This paper presents simple but successful methods for obtaining calibrated probability estimates from decision tree and naive Bayesian classifiers. Using the large and cha...
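One standard way to obtain better-calibrated probabilities from a decision tree is to smooth the raw class frequencies at each leaf. The Laplace correction below is a common such smoothing trick, shown here as an assumed illustration; the paper's exact method may differ.

```python
def laplace_leaf_probability(n_pos, n_total, n_classes=2):
    """Laplace-corrected class probability at a decision-tree leaf.
    The raw frequency n_pos / n_total is pulled toward 1 / n_classes,
    avoiding the overconfident 0.0 and 1.0 estimates that small,
    pure leaves would otherwise produce."""
    return (n_pos + 1) / (n_total + n_classes)
```

For a pure leaf with 5 positive examples, the raw estimate 5/5 = 1.0 becomes 6/7 ≈ 0.857, which matters when the estimate is multiplied against example-dependent costs in a decision rule.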


Theory of Optimizing Pseudolinear Performance Measures: Application to F-measure

State-of-the-art classification algorithms are designed to minimize the misclassification error of the system, which is a linear function of the per-class false negatives and false positives. Nonetheless, non-linear performance measures are widely used for the evaluation of learning algorithms. For example, F-measure is a commonly used non-linear performance measure in classification problems. ...
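The contrast the abstract draws can be made concrete: F1 is a ratio of two linear functions of the error counts (hence "pseudolinear"), not a linear function of them. A small illustration:

```python
def f1_from_errors(tp, fp, fn):
    """F1 = 2*TP / (2*TP + FP + FN): a ratio of linear functions of the
    per-class error counts, so it is pseudolinear rather than linear in
    the false positives and false negatives."""
    return 2 * tp / (2 * tp + fp + fn)
```

Doubling FP and FN does not halve the score or subtract a fixed amount, which is why minimizing misclassification error does not in general optimize F-measure.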


Efficient classification of noisy speech using neural networks

The classification of active speech vs. inactive speech in noisy speech is an important part of speech applications, typically in order to achieve a lower bit-rate. In this work, the error rates for raw classification (i.e. with no hangover mechanism) of noisy speech obtained with traditional classification algorithms are compared to the rates obtained with Neural Network classifiers, trained w...




Publication year: 2003